Language models (LMs) have demonstrated remarkable performance on downstream tasks using in-context exemplars or human instructions. Recent works have shown that chain-of-thought (CoT) prompting can elicit models to solve complex reasoning tasks step by step. However, the efficacy of prompt-based CoT methods is restricted to very large LMs such as GPT-3 (175B), thus limiting deployability. In this paper, we revisit the fine-tuning approach to enable complex reasoning in smaller LMs, optimized to perform a specific task efficiently. We propose Fine-tune-CoT, a method that leverages the capabilities of very large LMs to generate reasoning samples and teach smaller models via fine-tuning. We evaluate our method on publicly available LMs across a wide range of complex tasks and model sizes. We find that Fine-tune-CoT enables substantial reasoning capability in small models, whereas previous prompt-based baselines exhibit near-random performance. Student models can even outperform the teacher on some tasks while reducing model size requirements by several orders of magnitude. We conduct extensive ablations and sample studies to understand the reasoning capabilities of student models. We also identify several important nuances that have been overlooked in concurrent fine-tuning works on CoT and address them in our analysis.
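To make the recipe concrete, below is a minimal sketch of the data-generation step the abstract describes: a very large teacher LM is prompted for step-by-step rationales, incorrect ones are filtered out against gold answers, and the survivors are formatted as fine-tuning text for a small student. `query_teacher`, the prompt template, and the answer-extraction heuristic are illustrative assumptions, not the paper's exact implementation.

```python
# Hypothetical sketch of Fine-tune-CoT's data generation, assuming a generic
# text-completion teacher; names and heuristics here are stand-ins.
from dataclasses import dataclass


@dataclass
class ReasoningSample:
    question: str
    rationale: str
    answer: str


def query_teacher(prompt: str) -> str:
    """Stand-in for a call to a very large teacher LM (e.g., GPT-3)."""
    raise NotImplementedError("wire up your LM API here")


def generate_cot_samples(questions, gold_answers, n_tries=4):
    """Collect step-by-step rationales from the teacher, keeping correct ones."""
    samples = []
    for question, gold in zip(questions, gold_answers):
        for _ in range(n_tries):
            completion = query_teacher(f"Q: {question}\nA: Let's think step by step.")
            rationale, _, answer = completion.rpartition("Therefore, the answer is")
            if answer.strip().rstrip(".") == gold:  # discard incorrect reasoning
                samples.append(ReasoningSample(question, rationale.strip(), gold))
                break
    return samples


def to_finetune_text(sample: ReasoningSample) -> str:
    """Format one sample as plain-text fine-tuning data for the small student."""
    return (f"Q: {sample.question}\nA: {sample.rationale} "
            f"Therefore, the answer is {sample.answer}.")
```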
Cell segmentation is a fundamental task in computational biology analysis. Identifying cell instances is often the first step in various downstream biomedical studies. However, many cell segmentation algorithms, including the recently emerging deep learning-based methods, still show limited generality in multi-modality settings. The Weakly Supervised Cell Segmentation in Multi-modality High-Resolution Microscopy Images challenge was hosted at NeurIPS 2022 to tackle this problem. We propose MEDIAR, a holistic pipeline for cell instance segmentation under multiple modalities, for this challenge. MEDIAR harmonizes data-centric and model-centric approaches as its learning and inference strategies, achieving a 0.9067 F1-score in the validation phase while satisfying the time budget. To facilitate subsequent research, we provide the source code and trained model as open source: https://github.com/Lee-Gihun/MEDIAR
The Weather4Cast competition (hosted at NeurIPS 2022) required competitors to predict super-resolution rain movies over various regions of Europe, given low-resolution satellite contexts covering wider regions. In this paper, we show that a general baseline 3D U-Net can be significantly improved with region-conditioned layers as well as orthogonality regularization on 1x1x1 convolutional layers. Additionally, we facilitate generalization with a bag of training strategies: mixup data augmentation, self-distillation, and feature-wise linear modulation (FiLM). The presented modifications outperform the baseline 3D U-Net by up to 19.54% with less than 1% additional parameters, winning 4th place on the core test leaderboard.
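One of the modifications named above can be illustrated directly. A common way to realize orthogonality regularization is the soft penalty ||W W^T - I||_F^2 restricted to pointwise (1x1x1) Conv3d layers; the sketch below implements that generic regularizer, since the abstract does not specify the authors' exact formulation.

```python
# A minimal sketch of a soft orthogonality penalty on 1x1x1 Conv3d weights;
# the penalty form is the standard soft regularizer, assumed for illustration.
import torch
import torch.nn as nn


def orthogonality_penalty(model: nn.Module) -> torch.Tensor:
    terms = []
    for module in model.modules():
        if isinstance(module, nn.Conv3d) and module.kernel_size == (1, 1, 1):
            w = module.weight.flatten(1)  # (C_out, C_in)
            gram = w @ w.t()
            eye = torch.eye(gram.size(0), device=w.device, dtype=w.dtype)
            terms.append(((gram - eye) ** 2).sum())
    return torch.stack(terms).sum() if terms else torch.zeros(())


# Usage: total_loss = task_loss + lambda_orth * orthogonality_penalty(unet3d)
```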
Improperly constructed datasets can result in inaccurate inferences. For instance, models trained on biased datasets perform poorly in terms of generalization (i.e., dataset bias). Recent debiasing techniques have successfully achieved generalization performance by underestimating easy-to-learn samples (i.e., bias-aligned samples) and highlighting difficult-to-learn samples (i.e., bias-conflicting samples). However, these techniques may fail owing to noisy labels, because the trained model recognizes noisy labels as difficult-to-learn and thus highlights them. In this study, we find that earlier approaches that used the provided labels to quantify difficulty can be affected by even a small proportion of noisy labels. Furthermore, we find that running denoising algorithms before debiasing is ineffective because denoising algorithms reduce the impact of difficult-to-learn samples, including the valuable bias-conflicting samples. We therefore propose an approach called denoising after entropy-based debiasing (DENEB), which has three main stages. (1) The prejudice model is trained by emphasizing (bias-aligned, clean) samples, which are selected using a Gaussian Mixture Model. (2) Using the per-sample entropy of the prejudice model's output, a sampling probability proportional to the entropy is computed for each sample. (3) The final model is trained with existing denoising algorithms on mini-batches constructed by following the computed sampling probabilities. Compared to existing debiasing and denoising algorithms, our method achieves better debiasing performance on multiple benchmarks.
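Stages (2) and (3) translate almost directly into code. The hedged sketch below computes entropy-proportional sampling probabilities and draws mini-batch indices from them; function names are illustrative, and stage (1), the GMM-based selection, is omitted.

```python
# Sketch of DENEB stages (2)-(3) as described in the abstract: entropy of the
# prejudice model's predictions defines the mini-batch sampling distribution.
import torch
import torch.nn.functional as F


def entropy_sampling_probs(prejudice_logits: torch.Tensor) -> torch.Tensor:
    """Sampling probability per sample, proportional to predictive entropy."""
    probs = F.softmax(prejudice_logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    return entropy / entropy.sum()


def sample_minibatch_indices(p: torch.Tensor, batch_size: int) -> torch.Tensor:
    # High-entropy (hard, likely bias-conflicting) samples are drawn more often.
    return torch.multinomial(p, batch_size, replacement=True)
```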
We study the problems of model estimation and reward-free learning in episodic block MDPs. In these MDPs, the decision maker has access to rich observations or contexts generated from a small number of latent states. We are first interested in estimating the latent state decoding function (the mapping from observations to latent states) based on data generated under a fixed behavior policy. We derive an information-theoretic lower bound on the error rate of estimating this function and present an algorithm approaching this fundamental limit. In turn, our algorithm also provides estimates of all the components of the MDP. We then study the problem of learning near-optimal policies in the reward-free framework. Based on our efficient model estimation algorithm, we show that we can infer a policy converging, as the number of collected samples grows large, to the optimal policy at the best possible rate. Interestingly, our analysis provides necessary and sufficient conditions under which exploiting the block structure yields improvements in the sample complexity of identifying near-optimal policies. When these conditions are met, the sample complexity in the minimax reward-free setting is improved by a multiplicative factor $n$, where $n$ is the number of possible contexts.
In this paper, we propose a novel benchmark called the StarCraft Multi-Agent Challenges+ (SMAC+), in which agents learn to perform multi-stage tasks and to use environmental factors without precise reward functions. The previous challenge (SMAC), regarded as a standard benchmark for multi-agent reinforcement learning, is mainly concerned with ensuring that all agents cooperatively eliminate approaching adversaries solely through fine manipulation with an obvious reward function. This challenge, in contrast, is interested in the exploration capability of MARL algorithms to efficiently learn implicit multi-stage tasks and environmental factors, as well as micro-control. The study covers both offensive and defensive scenarios. In the offensive scenarios, agents must learn to first find the opponents and then eliminate them. The defensive scenarios require agents to use topographic features; for example, agents need to position themselves behind protective structures to make it harder for enemies to attack. We investigate MARL algorithms under SMAC+ and observe that recent approaches work well in scenarios similar to the previous challenge but perform poorly in the offensive scenarios. In addition, we observe that enhanced exploration methods have a positive effect on performance but cannot completely solve all scenarios. This study proposes new directions for future research.
Precipitation forecasting is an important scientific challenge with wide-reaching impacts on society. Historically, this challenge has been tackled using numerical weather prediction (NWP) models, grounded in physics-based simulation. Recently, many works have proposed an alternative approach, using end-to-end deep learning (DL) models to replace physics-based NWP. While these DL methods show improved performance and computational efficiency, they exhibit limitations in long-term forecasting and lack the explainability of NWP models. In this work, we present a hybrid NWP-DL workflow to fill the gap between standalone NWP and DL approaches. Under this workflow, the NWP output is fed into a deep model, which post-processes the data to yield a refined precipitation forecast. The deep model is trained with supervision, using Automatic Weather Station (AWS) observations as ground-truth labels. This can achieve the best of both worlds, and can even benefit from future improvements in NWP technology. To facilitate study in this direction, we present a novel dataset focused on the Korean Peninsula, termed KoMet (Korea Meteorological Dataset), comprising NWP predictions and AWS observations. For NWP, we use the Global Data Assimilation and Prediction System-Korea Integrated Model (GDAPS-KIM).
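The hybrid workflow amounts to supervised post-processing of NWP output. Below is an illustrative sketch, not the paper's configuration: the 16 stacked NWP variables, the small CNN, the gridded AWS label tensor, and the MSE loss are all assumptions made for the example.

```python
# Hypothetical NWP-DL post-processing step: a small CNN refines NWP fields
# into a precipitation map, trained against AWS-derived labels.
import torch
import torch.nn as nn
import torch.nn.functional as F

post_processor = nn.Sequential(  # maps stacked NWP variables -> precipitation
    nn.Conv2d(in_channels=16, out_channels=32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, 1, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(post_processor.parameters(), lr=1e-3)


def train_step(nwp_fields: torch.Tensor, aws_labels: torch.Tensor) -> float:
    """nwp_fields: (B, 16, H, W) NWP variables; aws_labels: (B, 1, H, W)."""
    pred = post_processor(nwp_fields)
    loss = F.mse_loss(pred, aws_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```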
Distributional reinforcement learning demonstrates state-of-the-art performance in continuous and discrete control settings, with variance and risk features that can be used for exploration. However, while many exploration methods in distributional RL employ the variance of the per-action return distribution, exploration methods that exploit the risk property are hard to find. In this paper, we propose risk scheduling approaches that explore risk levels and optimistic behaviors from a risk perspective. Through comprehensive experiments, we demonstrate the performance enhancement of the DMIX algorithm using risk scheduling in a multi-agent setting.
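To give the idea shape, here is a speculative illustration of risk scheduling on a quantile return distribution: act optimistically (upper quantiles) early in training and anneal toward risk-neutral behavior. Both the linear schedule and the truncated-mean risk measure are assumptions; the abstract does not give the paper's exact scheme.

```python
# Illustrative risk scheduling over a quantile return distribution.
import torch


def risk_level(step: int, total_steps: int) -> float:
    """Linearly anneal from fully optimistic (1.0) to risk-neutral (0.0)."""
    return max(0.0, 1.0 - step / total_steps)


def risk_sensitive_q(quantiles: torch.Tensor, eta: float) -> torch.Tensor:
    """quantiles: (n_actions, n_quantiles), sorted ascending along dim 1.
    eta = 0 averages all quantiles; eta -> 1 keeps only the top quantiles."""
    n = quantiles.size(1)
    k = max(1, int(n * (1.0 - eta)))
    return quantiles[:, -k:].mean(dim=1)  # (n_actions,)


# Usage: action = risk_sensitive_q(q_net(obs), risk_level(t, T)).argmax()
```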
Knowledge distillation (KD) has recently become a popular method for compressing neural networks. In recent studies, generalized distillation methods that simultaneously find the parameters and the architecture of the student model have been proposed. Still, such search methods require a large amount of computation to search over architectures, with the drawback of considering only convolutional blocks in their search space. This paper introduces a new algorithm, coined TRADE (Trust Region Aware architecture search to Distill knowledge Efficiently), that rapidly finds effective student architectures from several state-of-the-art architectures using trust region Bayesian optimization. Experimental results show that our proposed TRADE algorithm consistently outperforms both conventional NAS approaches and pre-defined architectures under KD training.
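A rough sketch of the trust-region idea follows, with random proposals plainly standing in for the Bayesian-optimization surrogate that TRADE actually uses. Here `arch` is a dict of continuous architecture knobs (e.g., width/depth multipliers), and `train_with_kd` is a placeholder for KD-training a student and returning its validation accuracy; both are illustrative assumptions.

```python
# Trust-region local search skeleton; the BO surrogate is replaced by random
# proposals purely for illustration.
import random


def train_with_kd(arch: dict) -> float:
    """Placeholder: KD-train a student with this architecture, return accuracy."""
    raise NotImplementedError


def trust_region_search(center: dict, n_rounds: int = 10, radius: float = 0.5):
    best, best_score = dict(center), train_with_kd(center)
    for _ in range(n_rounds):
        # Propose a candidate inside the current trust region around the best arch.
        cand = {k: v * (1 + radius * random.uniform(-1, 1)) for k, v in best.items()}
        score = train_with_kd(cand)
        if score > best_score:  # success: move the center and expand the region
            best, best_score, radius = cand, score, min(1.0, radius * 2)
        else:                   # failure: shrink the trust region
            radius *= 0.5
    return best, best_score
```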
As label noise, one of the most common distribution shifts, severely degrades the generalization performance of deep neural networks, robust training with noisy labels is becoming an important task in modern deep learning. In this paper, we propose our framework, coined Adaptive LAbel smoothing on Sub-ClAssifiers (ALASCA), which provides a robust feature extractor with theoretical guarantees and negligible additional computation. First, we derive that label smoothing (LS) induces an implicit Lipschitz regularization (LR). Then, based on this derivation, we apply adaptive LS (ALS) on sub-classifier architectures as a practical application of adaptive LR on intermediate layers. We conduct extensive experiments with ALASCA, combine it with previous noise-robust methods on several datasets, and show that our framework consistently outperforms the corresponding baselines.
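Under stated assumptions, the two ingredients named above can be sketched as follows: label smoothing (derived in the paper to act as implicit Lipschitz regularization) applied at auxiliary sub-classifier heads attached to intermediate layers. How ALASCA adapts the per-layer smoothing strengths `alphas` is not specified in the abstract, so they are left as inputs here.

```python
# Label smoothing at auxiliary sub-classifier heads, ALASCA-style (sketch).
import torch.nn.functional as F


def label_smoothing_ce(logits, target, alpha: float):
    """Cross-entropy against targets mixed with the uniform distribution."""
    log_p = F.log_softmax(logits, dim=1)
    nll = F.nll_loss(log_p, target)
    uniform = -log_p.mean(dim=1).mean()  # expected NLL under uniform labels
    return (1 - alpha) * nll + alpha * uniform


def alasca_style_loss(final_logits, aux_logits_list, target, alphas):
    loss = F.cross_entropy(final_logits, target)
    for logits, alpha in zip(aux_logits_list, alphas):
        loss = loss + label_smoothing_ce(logits, target, alpha)
    return loss
```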